Developing safe, stable, and efficient obstacle-avoidance policies for multiple robots is challenging. Most existing studies either use centralized control or require communication with other robots. In this paper, we propose a novel log-map-based deep reinforcement learning method for obstacle avoidance in complex, communication-free multi-robot scenarios. In particular, our method converts laser range information into a log-map. To improve training speed and generalization performance, our policy is trained in two specially designed multi-robot scenarios. Compared with other methods, the log-map represents obstacles more accurately and improves the success rate of obstacle avoidance. We finally evaluate our method in various simulated and real-world scenarios. The results show that our method provides a more stable and effective navigation solution for robots in complex multi-robot and pedestrian scenarios. A video is available at https://youtu.be/r0esuxe6mze.
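As a concrete illustration of the log-map idea, the sketch below compresses raw laser ranges with a logarithmic transform so that nearby obstacles occupy a larger share of the observation's dynamic range; the range limits, normalization, and function names are assumptions for illustration, not the paper's exact mapping.

```python
import numpy as np

def laser_to_log_map(ranges, r_min=0.1, r_max=10.0):
    """Illustrative sketch: log-compress raw laser ranges so nearby obstacles
    get finer resolution; outputs are normalized to [0, 1]."""
    r = np.clip(np.asarray(ranges, dtype=np.float32), r_min, r_max)
    return np.log(r / r_min) / np.log(r_max / r_min)

# Example: a 360-beam scan; a hypothetical policy network would consume this
scan = np.random.uniform(0.2, 10.0, size=360)
obs = laser_to_log_map(scan)
print(obs.shape, obs.min(), obs.max())
```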
Pancreatic cancer is one of the most malignant cancers in the world; it deteriorates rapidly and carries a very high mortality rate. The rapid on-site evaluation (ROSE) technique innovates the workflow by allowing on-site pathologists to analyze fast-stained cytopathological images immediately, enabling faster diagnosis in this time-pressured process. However, the broader expansion of ROSE diagnosis has been hindered by the shortage of experienced pathologists. To overcome this problem, we propose a hybrid, high-performance deep learning model to enable an automated workflow, thereby freeing up the precious time of the occupied pathologists. By introducing Transformer blocks into this field with our specific multi-stage hybrid design, the spatial features generated by a convolutional neural network (CNN) significantly enhance the Transformer's global modeling. Using the multi-stage spatial features as global attention guidance, this design combines the robustness of the CNN's inductive bias with the sophisticated global modeling capability of the Transformer. A dataset of 4,240 ROSE images was collected to evaluate the method in this unexplored field. The proposed multi-stage hybrid Transformer (MSHT) achieves 95.68% classification accuracy, which is distinctly higher than state-of-the-art models. Facing the need for interpretability, MSHT indicates its attention regions more accurately than its counterparts. The results demonstrate that MSHT can distinguish cancer samples accurately at an unprecedented image scale, laying the foundation for deploying automatic decision systems and expanding ROSE in clinical practice. The code and records are available at: https://github.com/sagizty/multi-stage-ybrid-transformer.
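To make the multi-stage hybrid idea concrete, here is a minimal sketch in which CNN feature maps are flattened into tokens and a pooled CNN feature serves as a global guidance token for a Transformer encoder; all layer sizes and the wiring are illustrative assumptions rather than the MSHT architecture.

```python
import torch
import torch.nn as nn

class HybridBlockSketch(nn.Module):
    """Minimal sketch of a CNN + Transformer hybrid stage: CNN feature maps
    become tokens, and a pooled CNN feature acts as a global guidance token.
    Sizes and wiring are illustrative assumptions, not the MSHT design."""
    def __init__(self, in_ch=3, dim=64, heads=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                                  batch_first=True)
        self.head = nn.Linear(dim, 2)  # e.g. cancerous vs. non-cancerous

    def forward(self, x):
        f = self.cnn(x)                          # (B, dim, H', W')
        tokens = f.flatten(2).transpose(1, 2)    # (B, H'*W', dim)
        guide = f.mean(dim=(2, 3)).unsqueeze(1)  # pooled global guidance token
        z = self.encoder(torch.cat([guide, tokens], dim=1))
        return self.head(z[:, 0])                # classify from guidance token

logits = HybridBlockSketch()(torch.randn(2, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 2])
```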
Transformer models have recently emerged as one of the foundational models in natural language processing, and as a by-product there has been significant recent interest and investment in scaling these models. However, the training and inference costs of these large Transformer language models are prohibitive, necessitating more research into identifying more efficient variants. In this work, we propose a simple yet effective modification to the Transformer architecture, inspired by the statistical language modeling literature, that augments the model with n-grams constructed from a discrete latent representation of the text sequence. We evaluate our model, the N-Grammer, on language modeling on the C4 dataset and on text classification on the SuperGLUE dataset, and find that it outperforms several strong baselines such as the Transformer and Primer. We open-source our model in JAX for reproducibility purposes.
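The sketch below illustrates the general flavor of n-gram augmentation: each (hashed) bigram of discrete ids gets its own embedding, which is concatenated with the unigram embedding before entering the Transformer stack. The hashing scheme, vocabulary, and sizes are assumptions for illustration and do not reproduce the open-sourced N-Grammer implementation.

```python
import torch
import torch.nn as nn

class NGramAugmentSketch(nn.Module):
    """Rough sketch of n-gram augmentation: look up an embedding for each
    hashed bigram of discrete token ids and concatenate it with the unigram
    embedding. Hashing and sizes are illustrative assumptions."""
    def __init__(self, vocab=32000, dim=256, ngram_buckets=100000, ngram_dim=64):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab, dim)
        self.bigram_emb = nn.Embedding(ngram_buckets, ngram_dim)
        self.buckets = ngram_buckets

    def forward(self, ids):                      # ids: (B, T) int64
        uni = self.tok_emb(ids)                  # (B, T, dim)
        prev = torch.roll(ids, shifts=1, dims=1)
        prev[:, 0] = 0                           # pad position for first token
        bigram_id = (ids * 1000003 + prev) % self.buckets  # cheap hash
        bi = self.bigram_emb(bigram_id)          # (B, T, ngram_dim)
        return torch.cat([uni, bi], dim=-1)      # fed to the Transformer stack

x = torch.randint(0, 32000, (2, 16))
print(NGramAugmentSketch()(x).shape)  # torch.Size([2, 16, 320])
```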
To pursue different business objectives, online traffic shaping algorithms aim to improve the exposure of a target set of items, for example boosting the growth of new commodities. Generally, these algorithms assume that the utility of each user-item pair can be accessed via a well-trained conversion rate prediction model. However, for real e-commerce platforms, unavoidable factors prevent us from learning such an accurate model. To break this heavy dependence on accurate utility inputs, we propose a general online traffic shaping protocol for online e-commerce applications. In our framework, we approximate the function that maps the bonus scores, which are generally the only means of influencing the ranking results, to the numbers of exposures and purchases in the traffic shaping problem. Concretely, we approximate this function by a class of piecewise linear functions constructed on the convex hulls of the explored data points. Moreover, we reformulate the online traffic shaping problem as a linear program in which these piecewise linear functions are embedded into both the objective and the constraints. Our algorithm simply optimizes the linear program in the primal space, and its solution can be directly applied as a stochastic strategy to fulfill the optimized objective and the expected constraints. Finally, online A/B tests show that our proposed algorithm stably outperforms the previous industrial-level traffic shaping algorithm.
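A toy version of the resulting linear program is sketched below: for a single target item group, the decision variable is a convex combination over explored bonus-score breakpoints, purchases are maximized, and a minimum-exposure constraint is enforced. All numbers and names are made up for illustration and do not come from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Toy sketch of the LP view of traffic shaping (numbers are hypothetical).
# For one target item group we have K explored bonus-score breakpoints; at
# breakpoint k the piecewise-linear model predicts exposure[k] and purchase[k].
# The decision variable w is a convex combination over breakpoints, so the
# solution reads as a stochastic mixture of bonus scores.
exposure = np.array([100.0, 180.0, 240.0, 280.0])
purchase = np.array([10.0, 16.0, 19.0, 20.0])
min_exposure = 200.0

c = -purchase                       # linprog minimizes, so negate purchases
A_ub = [-exposure]                  # expected exposure >= min_exposure
b_ub = [-min_exposure]
A_eq = [np.ones_like(purchase)]     # weights sum to 1 (convex combination)
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * len(purchase))
print(res.x, -res.fun)              # mixture over bonus scores, expected purchases
```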
By directly mapping perception inputs to robot control commands, deep reinforcement learning (DRL) algorithms have been shown to be effective for robot navigation, especially in unknown environments. However, most existing methods ignore the local minimum problem in navigation and thus cannot handle complex unknown environments. In this paper, we propose the first DRL-based navigation method modeled as an SMDP with a continuous action space, named Adaptive Forward Simulation Time (AFST), to overcome this problem. Specifically, we improve the distributed proximal policy optimization (DPPO) algorithm for the specified SMDP problem by modifying its GAE to better estimate the policy gradient in SMDPs. We evaluate our method both in simulation and in the real world.
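One generic way to adapt GAE to variable-duration (SMDP) transitions is to discount each step by gamma raised to its duration, as in the sketch below; this is an illustrative construction under that assumption, not necessarily the exact estimator used in AFST.

```python
import numpy as np

def smdp_gae_sketch(rewards, values, durations, gamma=0.99, lam=0.95):
    """Illustrative sketch of generalized advantage estimation when each
    transition lasts a variable duration tau (an SMDP), so discounting uses
    gamma**tau instead of a fixed gamma per step."""
    T = len(rewards)
    adv = np.zeros(T)
    last = 0.0
    for t in reversed(range(T)):
        g = gamma ** durations[t]
        delta = rewards[t] + g * values[t + 1] - values[t]
        last = delta + g * lam * last
        adv[t] = last
    return adv

rewards = [1.0, 0.5, -0.2]
values = [0.8, 0.7, 0.3, 0.0]        # bootstrap value appended at the end
durations = [0.5, 1.0, 2.0]          # seconds of forward simulation per step
print(smdp_gae_sketch(rewards, values, durations))
```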
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
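A minimal sketch of the cross-modal idea is given below: image and point-cloud tokens each receive a position embedding computed from 3D coordinates, are concatenated into one memory, and a set of object queries decodes boxes from it. The dimensions, heads, and box parameterization are illustrative assumptions rather than the CMT configuration.

```python
import torch
import torch.nn as nn

class CrossModalSketch(nn.Module):
    """Sketch: image and point-cloud tokens share a 3D position embedding,
    and object queries regress boxes from the concatenated memory."""
    def __init__(self, dim=128, num_queries=50):
        super().__init__()
        self.pos_mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.box_head = nn.Linear(dim, 7)   # (x, y, z, w, l, h, yaw)

    def forward(self, img_tokens, img_xyz, pts_tokens, pts_xyz):
        mem = torch.cat([img_tokens + self.pos_mlp(img_xyz),
                         pts_tokens + self.pos_mlp(pts_xyz)], dim=1)
        q = self.queries.unsqueeze(0).expand(img_tokens.size(0), -1, -1)
        return self.box_head(self.decoder(q, mem))

B, Ni, Np, D = 2, 100, 200, 128
boxes = CrossModalSketch()(torch.randn(B, Ni, D), torch.randn(B, Ni, 3),
                           torch.randn(B, Np, D), torch.randn(B, Np, 3))
print(boxes.shape)  # torch.Size([2, 50, 7])
```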
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
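The sketch below shows one generic way a style code can adjust a feed-forward layer: the code predicts a per-channel scale and shift applied to the hidden activations, so different style codes effectively re-weight the layer. This modulation scheme and all sizes are assumptions for illustration, not the exact StyleTalk design.

```python
import torch
import torch.nn as nn

class StyleAdaptiveFFNSketch(nn.Module):
    """Sketch of a style-aware feed-forward layer: a style code predicts a
    per-channel scale and shift for the hidden activations, so different
    styles re-weight the layer. Generic modulation, illustrative sizes."""
    def __init__(self, dim=256, hidden=512, style_dim=128):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, dim)
        self.to_scale_shift = nn.Linear(style_dim, 2 * hidden)

    def forward(self, x, style_code):
        scale, shift = self.to_scale_shift(style_code).chunk(2, dim=-1)
        h = torch.relu(self.fc1(x))
        h = h * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)  # style modulation
        return self.fc2(h)

x = torch.randn(2, 30, 256)            # 30 frames of content features
style = torch.randn(2, 128)            # style code from the reference video
print(StyleAdaptiveFFNSketch()(x, style).shape)  # torch.Size([2, 30, 256])
```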
The visual dimension of cities has been a fundamental subject in urban studies, since the pioneering work of scholars such as Sitte, Lynch, Arnheim, and Jacobs. Several decades later, big data and artificial intelligence (AI) are revolutionizing how people move, sense, and interact with cities. This paper reviews the literature on the appearance and function of cities to illustrate how visual information has been used to understand them. A conceptual framework, Urban Visual Intelligence, is introduced to systematically elaborate on how new image data sources and AI techniques are reshaping the way researchers perceive and measure cities, enabling the study of the physical environment and its interactions with socioeconomic environments at various scales. The paper argues that these new approaches enable researchers to revisit the classic urban theories and themes, and potentially help cities create environments that are more in line with human behaviors and aspirations in the digital age.
Deploying reliable deep learning techniques in interdisciplinary applications requires learned models to output accurate and (even more importantly) explainable predictions. Existing approaches typically explicate network outputs in a post-hoc fashion, under an implicit assumption that faithful explanations come from accurate predictions/classifications. We make the opposite claim: explanations boost (or even determine) classification. That is, end-to-end learning of explanation factors to augment discriminative representation extraction could be a more intuitive strategy to inversely assure fine-grained explainability, e.g., in those neuroimaging and neuroscience studies with high-dimensional data containing noisy, redundant, and task-irrelevant information. In this paper, we propose such an explainable geometric deep network dubbed NeuroExplainer, with applications to uncover altered infant cortical development patterns associated with preterm birth. Given fundamental cortical attributes as network input, our NeuroExplainer adopts a hierarchical attention-decoding framework to learn fine-grained attentions and respective discriminative representations to accurately recognize preterm infants from term-born infants at term-equivalent age. NeuroExplainer learns the hierarchical attention-decoding modules under subject-level weak supervision coupled with targeted regularizers deduced from domain knowledge regarding brain development. These prior-guided constraints implicitly maximize the explainability metrics (i.e., fidelity, sparsity, and stability) in network training, driving the learned network to output detailed explanations and accurate classifications. Experimental results on the public dHCP benchmark suggest that NeuroExplainer leads to quantitatively reliable explanation results that are qualitatively consistent with representative neuroimaging studies.
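As a toy illustration of the "explanations drive classification" idea, the sketch below gates vertex features with a learned attention map, classifies from the attended features, and adds an L1 sparsity penalty on the attention so explanations stay compact; the architecture, sizes, and regularizer weight are assumptions, not the NeuroExplainer design.

```python
import torch
import torch.nn as nn

class AttentionExplainSketch(nn.Module):
    """Toy sketch: per-vertex attention gates features before pooling, and an
    L1 penalty on the attention map encourages sparse, readable explanations."""
    def __init__(self, in_dim=4, dim=64, classes=2):
        super().__init__()
        self.embed = nn.Linear(in_dim, dim)
        self.attn = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())
        self.cls = nn.Linear(dim, classes)

    def forward(self, x):                       # x: (B, V, in_dim) vertex features
        h = torch.relu(self.embed(x))
        a = self.attn(h)                        # (B, V, 1) attention map
        logits = self.cls((a * h).mean(dim=1))
        sparsity_loss = a.abs().mean()          # sparsity regularizer
        return logits, a, sparsity_loss

x = torch.randn(2, 1000, 4)                     # e.g. 1000 cortical vertices
logits, attn, sparse = AttentionExplainSketch()(x)
loss = nn.functional.cross_entropy(logits, torch.tensor([0, 1])) + 0.01 * sparse
print(logits.shape, attn.shape, loss.item())
```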
Domain adaptive detection aims to improve the generalization of detectors on the target domain. To reduce the discrepancy in feature distributions between two domains, recent approaches achieve domain adaptation through feature alignment at different granularities via adversarial learning. However, they neglect the relationship between multiple granularities and different features in alignment, degrading detection performance. Addressing this, we introduce a unified multi-granularity alignment (MGA)-based detection framework for domain-invariant feature learning. The key is to encode the dependencies across different granularities, including pixel-, instance-, and category-levels, simultaneously to align two domains. Specifically, based on pixel-level features, we first develop an omni-scale gated fusion (OSGF) module to aggregate discriminative representations of instances with scale-aware convolutions, leading to robust multi-scale detection. Besides, we introduce multi-granularity discriminators to identify which domain, source or target, samples of different granularities come from. Note that MGA not only leverages instance discriminability in different categories but also exploits category consistency between two domains for detection. Furthermore, we present an adaptive exponential moving average (AEMA) strategy that exploits model assessments for model updates to improve pseudo labels and alleviate the local misalignment problem, boosting detection robustness. Extensive experiments on multiple domain adaptation scenarios validate the superiority of MGA over other approaches on FCOS and Faster R-CNN detectors. Code will be released at https://github.com/tiankongzhang/MGA.
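One plausible form of an adaptive EMA teacher update is sketched below: the momentum is modulated by a model-assessment score so that a better-assessed student is mixed into the teacher more quickly. The exact schedule is an assumption and not necessarily the AEMA rule used in MGA.

```python
import torch
import torch.nn as nn

def adaptive_ema_update_sketch(student, teacher, assessment, base_m=0.999):
    """Sketch of an adaptive EMA update: momentum is modulated by a
    model-assessment score in [0, 1] (e.g. pseudo-label quality), so a
    better-performing student updates the teacher faster."""
    m = base_m * (1.0 - 0.5 * assessment)        # higher assessment -> smaller momentum
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(m).add_(p_s, alpha=1.0 - m)

student = nn.Linear(8, 2)
teacher = nn.Linear(8, 2)
teacher.load_state_dict(student.state_dict())
adaptive_ema_update_sketch(student, teacher, assessment=0.7)
print(next(teacher.parameters()).shape)
```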